
Numerical Analysis of Dynamical Systems: The Euler Method

Originally an optional exercise in model discretization for the Modeling of Dynamical Systems course (2024/2025).

Note: While only the basic numerical form of the LV and Lorenz models was requested, the formal derivation and comprehensive error analysis presented here were self-initiated to explore the mathematical foundations of the Euler scheme.

Abstract

This report presents a formal derivation of the Euler method and an analysis of its local and global truncation errors. The scheme is applied to the numerical simulation of dynamical systems, specifically the Lotka-Volterra and Lorenz models. As established in large-scale simulation frameworks like NEST[1] and in computationally efficient neuron models such as Izhikevich's[2], the forward Euler scheme is frequently chosen as a "golden mean." It allows for the simulation of massive neural ensembles where the computational overhead of higher-order methods is prohibitive, and the inherent stochasticity of biological systems makes extreme numerical precision unnecessary.


1. Euler Method - Theory

Let us consider an ordinary differential equation:

$$ y'(t) = f(t, y(t)) $$

for which an initial condition \( y(t_0) = y_0 \) is given, with \( t \in [t_0, T] \).

The function \( y(t) \) can be expanded into a Taylor series around the point \( t_0 \) for \( t_1 = t_0 + \Delta t \):

$$ y(t_1) = y(t_0 + \Delta t) = y(t_0) + \Delta t\ y'(t_0) + \frac{(\Delta t)^2}{2!} y''(t_0) + \frac{(\Delta t)^3}{3!} y'''(t_0) + \dots $$

By omitting the higher-order terms, we obtain the approximation:

$$ y(t_1) = y(t_0 + \Delta t) = y(t_0) + \Delta t \ y'(t_0) + \mathcal{O}(\Delta t^2) $$
$$ y(t_1) \approx y(t_0) + \Delta t \ y'(t_0) $$

Next, by substituting the function \( f(t, y(t)) \) for \( y'(t) \), we get:

$$ y(t_1) \approx y(t_0) + \Delta t \ f(t_0, y(t_0)) $$
$$ y_1 = y_0 + \Delta t \ f(t_0, y_0) $$

Similarly, we can determine the subsequent approximations for \( t_2 = t_1 + \Delta t, \dots \)

$$ \begin{aligned} y_2 &= y_1 + \Delta t \ f(t_1, y_1) \\ y_3 &= y_2 + \Delta t \ f(t_2, y_2) \\ &\vdots \\ y_n &= y_{n-1} + \Delta t \ f(t_{n-1}, y_{n-1}) \end{aligned} $$

The above iterations can be written in the form of a general formula, yielding:

$$ y_{i + 1} = y_{i} + \Delta t \ f(t_{i}, y_{i}) \quad \text{for } i \in \{0, \dots, n - 1\} $$
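As a quick sanity check of this iteration, here is a minimal Python sketch (the test equation \( y' = y \) and the step count are illustrative choices, not part of the original exercise) applied to \( y' = y \), \( y(0) = 1 \), whose exact solution at \( t = 1 \) is \( e \):

```python
import math

def euler_scalar(f, y0, t0, T, n):
    """Apply y_{i+1} = y_i + dt * f(t_i, y_i) over n steps on [t0, T]."""
    dt = (T - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + dt * f(t, y)
        t = t + dt
    return y

# y' = y with y(0) = 1; the exact value at t = 1 is e = 2.71828...
approx = euler_scalar(lambda t, y: y, 1.0, 0.0, 1.0, 1000)
# approx is close to e; the gap shrinks roughly in proportion to dt
```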

The scheme generalizes naturally to multidimensional systems \( \mathbf{x}'(t) = \mathbf{f}(t, \mathbf{x}(t)) \), which in expanded form reads:

$$ \begin{cases} x_1'(t) = f_1(t, x_1(t), x_2(t), \dots, x_n(t)) \\ \vdots \\ x_n'(t) = f_n(t, x_1(t), x_2(t), \dots, x_n(t)) \end{cases} $$

Returning to a compact vector form, the algorithm takes the form:

$$ \mathbf{x}_{i+1} = \mathbf{x}_{i} + \Delta t \ \mathbf{f}(t_i, \mathbf{x}_i) $$
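The vector form translates directly into code. The sketch below (plain Python lists are used here to keep the example dependency-free; the harmonic-oscillator test system is an illustrative assumption) steps an arbitrary first-order system:

```python
def euler_vector(f, x0, t0, dt, n_steps):
    """Vector forward Euler: x_{i+1} = x_i + dt * f(t_i, x_i).

    Returns the trajectory [x_0, x_1, ..., x_n] as a list of state lists.
    """
    traj = [list(x0)]
    t, x = t0, list(x0)
    for _ in range(n_steps):
        dx = f(t, x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
        t += dt
        traj.append(x)
    return traj

# Example: the harmonic oscillator x'' = -x rewritten as a first-order
# system with state (position, velocity), so f(t, x) = (x2, -x1).
path = euler_vector(lambda t, x: [x[1], -x[0]], [1.0, 0.0], 0.0, 0.01, 100)
# path[-1][0] approximates cos(1) = 0.5403...
```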

2. Lotka-Volterra Model

Method 1: Direct Vector Formula

The Lotka-Volterra predator-prey model can be represented as a system:

$$ \begin{cases} x'(t) = f(t, x(t), y(t)) \\ y'(t) = g(t, x(t), y(t)) \end{cases} $$

From the original LV equations:

$$ \begin{cases} \dot{x} = (\alpha - \beta y) x \\ \dot{y} = (\gamma x - \delta) y \end{cases} $$

we obtain the discrete system:

$$ \begin{cases} x_{i+1} = x_i + \Delta t \ (\alpha - \beta y_i)x_i \\ y_{i+1} = y_i + \Delta t \ (\gamma x_i - \delta) y_i \end{cases} $$
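This discrete system can be simulated with a few lines of Python. The parameter values and initial populations below are illustrative assumptions, not taken from the original report:

```python
# Forward Euler for the Lotka-Volterra system from the text:
#   x' = (alpha - beta*y) * x   (prey)
#   y' = (gamma*x - delta) * y  (predators)
# Parameters and initial state are illustrative choices.
alpha, beta, gamma, delta = 1.0, 0.5, 0.5, 1.0
dt, n_steps = 0.001, 20000
x, y = 2.0, 1.0  # initial prey and predator populations

xs, ys = [x], [y]
for _ in range(n_steps):
    x_next = x + dt * (alpha - beta * y) * x
    y_next = y + dt * (gamma * x - delta) * y
    x, y = x_next, y_next
    xs.append(x)
    ys.append(y)
# Both populations remain positive and oscillate around the
# nontrivial fixed point (delta/gamma, alpha/beta) = (2, 2).
```

Note that forward Euler does not preserve the closed orbits of the exact solution: the simulated trajectory slowly spirals outward, an artifact of the \( \mathcal{O}(\Delta t) \) global error analyzed in Section 4.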

Method 2: Finite Differences

Discretization can also be approached from the perspective of finite differences. Given the continuous equations, we replace the derivatives with forward difference approximations.

For the first equation:

$$ \frac{\Delta x_i}{\Delta t} = (\alpha - \beta y_i ) \ x_i $$
$$ \frac{x_{i + 1} - x_{i}}{\Delta t} = (\alpha - \beta y_i ) \ x_i $$
$$ x_{i + 1} = x_{i} + (\alpha - \beta y_{i}) \ x_{i} \ \Delta t $$

Similarly for the second equation:

$$ \frac{\Delta y_i}{\Delta t} = (\gamma x_i - \delta) \ y_i $$
$$ y_{i + 1} = y_{i} + (\gamma x_i - \delta) \ y_i \ \Delta t $$

3. Lorenz System

The Lorenz system can be represented as a three-dimensional system. Taking the original equations:

$$ \begin{cases} \dot{x} = \sigma (y - x) \\ \dot{y} = x ( \rho - z) - y \\ \dot{z} = xy - \beta z \end{cases} $$

we obtain the discrete numerical scheme:

$$ \begin{cases} x_{i + 1} = x_i + \Delta t \ \sigma (y_i - x_i) \\ y_{i + 1} = y_i + \Delta t \ (x_i(\rho - z_i) - y_i) \\ z_{i + 1} = z_i + \Delta t \ (x_i y_i - \beta z_i) \end{cases} $$
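A direct transcription of this scheme into Python follows. The classic parameter set \( \sigma = 10 \), \( \rho = 28 \), \( \beta = 8/3 \) and the initial condition are assumptions chosen for illustration; the small step size reflects the system's sensitivity to discretization:

```python
# Forward Euler for the Lorenz system with the classic chaotic
# parameters (sigma = 10, rho = 28, beta = 8/3), assumed here.
sigma, rho, beta_l = 10.0, 28.0, 8.0 / 3.0
dt, n_steps = 0.001, 10000
x, y, z = 1.0, 1.0, 1.0  # illustrative initial condition

for _ in range(n_steps):
    x_next = x + dt * sigma * (y - x)
    y_next = y + dt * (x * (rho - z) - y)
    z_next = z + dt * (x * y - beta_l * z)
    x, y, z = x_next, y_next, z_next
# After transients, the state wanders on the Lorenz attractor and
# stays bounded, even though individual trajectories are chaotic.
```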

4. Euler Method - Error Analysis

The error of the method can be rigorously analyzed using Taylor's theorem with an integral remainder. For a vector function \( \mathbf{x}(t) \), expanding around \( a = t_i \) and evaluating at \( t = t_{i + 1} \), we obtain:

$$ \mathbf{x}(t_{i + 1}) = \mathbf{x}(t_i) + \mathbf{x}'(t_i) \Delta t + \int_{t_i}^{t_{i + 1}} (t_{i + 1} - \tau) \mathbf{x}''(\tau) d\tau $$

The magnitude of the local truncation error can be estimated by bounding the norm of the remainder. Applying the triangle inequality for integrals and bounding \( \|\mathbf{x}''\| \) by its maximum over the step:

$$ \begin{aligned} \bigg\| \int_{t_i}^{t_{i + 1}} (t_{i + 1} - \tau) \mathbf{x}''(\tau) d\tau \bigg\| &\leq \int_{t_i}^{t_{i + 1}} \big(t_{i + 1} - \tau\big) \| \mathbf{x}''(\tau) \| d\tau \\ &\leq \max_{t_i \leq c \leq t_{i+1}} \| \mathbf{x}''(c) \| \frac{(\Delta t)^2}{2} \end{aligned} $$

Thus, we can write the remainder integral as a vector:

$$ \int_{t_i}^{t_{i + 1}} (t_{i + 1} - \tau) \mathbf{x}''(\tau) d\tau = \frac{(\Delta t)^2}{2} \boldsymbol{\xi}_i $$

where \( \|\boldsymbol{\xi}_i\| \leq \max \| \mathbf{x}''(c) \| \).

The exact value of the solution satisfies:

$$ \mathbf{x}(t_{i + 1}) = \mathbf{x}(t_i) + \Delta t \ \mathbf{f}(t_i, \mathbf{x}(t_i)) + \frac{(\Delta t)^2}{2} \boldsymbol{\xi}_i $$

Subtracting the Euler iteration \( \mathbf{x}_{i+1} = \mathbf{x}_{i} + \Delta t \ \mathbf{f}(t_i, \mathbf{x}_i) \) and introducing the global truncation error vector \( \mathbf{e}_i = \mathbf{x}(t_i) - \mathbf{x}_i \), we obtain:

$$ \mathbf{e}_{i + 1} = \mathbf{e}_{i} + \Delta t \big[ \mathbf{f}(t_i, \mathbf{x}(t_i)) - \mathbf{f}(t_i, \mathbf{x}_i) \big] + \frac{(\Delta t)^2}{2} \boldsymbol{\xi}_i $$

Assuming that the function \( \mathbf{f} \) satisfies the Lipschitz condition with constant \( L \) with respect to the second argument:

$$ \begin{aligned} \| \mathbf{e}_{i + 1} \| &\leq \|\mathbf{e}_{i}\| + \Delta t \ \| \mathbf{f}(t_i, \mathbf{x}(t_i)) - \mathbf{f}(t_i, \mathbf{x}_i) \| + \frac{(\Delta t)^2}{2} \|\boldsymbol{\xi}_i\| \\ &\leq \|\mathbf{e}_{i}\| + \Delta t \ L \ \| \mathbf{x}(t_i) - \mathbf{x}_i \| + \frac{(\Delta t)^2}{2}\| \boldsymbol{\xi}_i\| \\ &\leq (1 + \Delta t \ L) \|\mathbf{e}_i\| + \frac{(\Delta t)^2}{2} M \end{aligned} $$

where \( M = \max_t \|\mathbf{x}''(t)\| \).

Assuming \( \mathbf{x}_0 = \mathbf{x}(t_0) \) is known exactly, we obtain \( \|\mathbf{e}_0\| = 0 \). Solving the resulting recurrence, we get the error at the \( k \)-th step:

$$ \|\mathbf{e}_k\| \leq \sum_{j=0}^{k-1}(1+ \Delta t \ L)^j \frac{(\Delta t)^2}{2}M $$

From the formula for the partial sum of a geometric sequence:

$$ \|\mathbf{e}_k\| \leq \frac{(1 + \Delta t \ L)^k - 1}{(1 + \Delta t \ L) - 1} \frac{(\Delta t)^2}{2}M = \frac{(\Delta t)}{2L}\big[ (1 + \Delta t L)^k - 1 \big] M $$

Using the elementary inequality \( 1 + x \leq e^x \), hence \( 1 + \Delta t \ L \leq e^{\Delta t \ L} \), and noting that \( k \, \Delta t = t_k - t_0 \):

$$ \|\mathbf{e}_k\| \leq \frac{\Delta t \ M}{2 L} \big[ (e^{\Delta t \ L})^k - 1 \big] = \frac{\Delta t \ M}{2 L} \big[ e^{L(t_k - t_0)} - 1 \big] $$

One can observe a key property of the Euler method: despite the fact that the error committed in a single iteration (local truncation error) is of order \( \mathcal{O}(\Delta t^2) \), the accumulation of errors causes the overall error (global truncation error) at the end of the interval to grow linearly with respect to the time step and is ultimately of order \( \mathcal{O}(\Delta t) \).
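This first-order convergence is easy to verify numerically. The sketch below (the test problem \( y' = y \) is an illustrative choice) halves the step size and checks that the global error at \( t = 1 \) roughly halves as well:

```python
import math

def euler_final(n_steps):
    """Integrate y' = y, y(0) = 1 up to t = 1 with forward Euler."""
    dt = 1.0 / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += dt * y
    return y

# Exact solution at t = 1 is e; compare errors for dt and dt/2.
err_coarse = abs(euler_final(100) - math.e)  # dt = 0.01
err_fine = abs(euler_final(200) - math.e)    # dt = 0.005
ratio = err_coarse / err_fine
# ratio is close to 2, confirming the global error is O(dt)
```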


References